


We address some of the questions raised by the reviewers as much as time and space allow

Neural Information Processing Systems

First, we thank all the reviewers for their invaluable assessment of our paper in this challenging time. To provide more reliable evidence for AdvFlow's distributional properties, and for the sake of completeness, we also add LID [31]. The results are given in Table 1, indicating that the attacker's distributional properties are fooling the detectors. As seen, we get results similar to Table 2 of the paper, outperforming SimBA on defended baselines. Note that some of the current SOTA results in black-box adversarial attacks come from the attacker's knowledge about the target; however, once the target changes its training procedure (e.g., from vanilla training), this prior knowledge no longer holds. See the official repository of SimBA, where this is clearly indicated. The results of Tables 1 and 2 (as well as SVHN) will be added to the camera-ready version.





New Adversarial Image Detection Based on Sentiment Analysis

Wang, Yulong, Li, Tianxiang, Li, Shenghong, Yuan, Xin, Ni, Wei

arXiv.org Artificial Intelligence

Deep Neural Networks (DNNs) are vulnerable to adversarial examples, while adversarial attack models, e.g., DeepFool, are on the rise and outrunning adversarial example detection techniques. This paper presents a new adversarial example detector that outperforms state-of-the-art detectors in identifying the latest adversarial attacks on image datasets. Specifically, we propose to use sentiment analysis for adversarial example detection, qualified by the progressively manifesting impact of an adversarial perturbation on the hidden-layer feature maps of a DNN under attack. Accordingly, we design a modularized embedding layer with the minimum learnable parameters to embed the hidden-layer feature maps into word vectors and assemble sentences ready for sentiment analysis. Extensive experiments demonstrate that the new detector consistently surpasses the state-of-the-art detection algorithms in detecting the latest attacks launched against ResNet and Inception neural networks on the CIFAR-10, CIFAR-100 and SVHN datasets. The detector only has about 2 million parameters, and takes less than 4.6 milliseconds to detect an adversarial example generated by the latest attack models using a Tesla K80 GPU card.
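The core idea of embedding per-layer feature maps into "word" vectors and stacking them into a "sentence" can be sketched as follows. This is a minimal illustration, not the paper's implementation: the pooling step, the random projection matrices (standing in for the paper's learnable modular embedding layer), and all names are assumptions.

```python
import numpy as np

def embed_feature_maps(feature_maps, embed_dim=64, rng=None):
    """Embed hidden-layer feature maps into fixed-size 'word' vectors.

    Each feature map of shape (C, H, W) is global-average-pooled to a
    C-vector, then linearly projected into a common embed_dim space.
    Stacking the projections yields a 'sentence' with one word per layer,
    ready for a downstream sentiment-style sequence classifier.
    """
    rng = rng or np.random.default_rng(0)
    words = []
    for fmap in feature_maps:
        pooled = fmap.mean(axis=(1, 2))  # (C,) summary of the layer
        # Fixed random projection as a stand-in for a learned embedding.
        W = rng.standard_normal((pooled.size, embed_dim)) / np.sqrt(pooled.size)
        words.append(pooled @ W)         # (embed_dim,) 'word' for this layer
    return np.stack(words)               # (num_layers, embed_dim) 'sentence'

# Toy usage: three hidden layers with different channel counts.
maps = [np.random.default_rng(1).standard_normal((c, 8, 8)) for c in (16, 32, 64)]
sentence = embed_feature_maps(maps)
print(sentence.shape)  # (3, 64)
```

The point of projecting into a shared dimension is that layers with different channel counts become comparable tokens in one sequence, so a single sequence model can read the perturbation's progressively manifesting impact across depth.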


AdvFlow: Inconspicuous Black-box Adversarial Attacks using Normalizing Flows

Dolatabadi, Hadi M., Erfani, Sarah, Leckie, Christopher

arXiv.org Machine Learning

Deep learning classifiers are susceptible to well-crafted, imperceptible variations of their inputs, known as adversarial attacks. In this regard, the study of powerful attack models sheds light on the sources of vulnerability in these classifiers, hopefully leading to more robust ones. In this paper, we introduce AdvFlow: a novel black-box adversarial attack method on image classifiers that exploits the power of normalizing flows to model the density of adversarial examples around a given target image. We see that the proposed method generates adversaries that closely follow the clean data distribution, a property which makes their detection less likely. Also, our experimental results show competitive performance of the proposed approach with some of the existing attack methods on defended classifiers. The code is available at https://github.com/hmdolatabadi/AdvFlow.
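The general recipe of sampling perturbations around a clean image via a latent base distribution pushed through an invertible map can be sketched as below. This is a hedged toy sketch, not AdvFlow itself: the element-wise affine map stands in for a trained normalizing flow, and the function name, parameters, and budget are assumptions.

```python
import numpy as np

def sample_adversaries(x, flow_params, eps=8 / 255, n=5, rng=None):
    """Sample candidate adversarial images in a ball around a clean image x.

    A Gaussian latent z is pushed through a toy element-wise affine flow
    (scale a, shift b), squashed by tanh, and scaled by the budget eps,
    following the generic pattern x_adv = clip(x + eps * tanh(f(z))).
    """
    rng = rng or np.random.default_rng(0)
    a, b = flow_params
    z = rng.standard_normal((n,) + x.shape)  # n latent draws
    delta = eps * np.tanh(a * z + b)         # bounded perturbations
    return np.clip(x + delta, 0.0, 1.0)      # keep valid pixel range

# Toy usage: five candidates around a flat gray image.
x = np.full((3, 4, 4), 0.5)
cands = sample_adversaries(x, flow_params=(1.0, 0.0))
print(cands.shape)  # (5, 3, 4, 4)
```

Because the flow is invertible, every candidate has a tractable density under the model, which is what lets the search stay close to the clean data distribution rather than drifting into easily detected perturbations.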